

Search for: All records

Creators/Authors contains: "Liu, Mingyan"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Free, publicly-accessible full text available February 22, 2025
  2. Free, publicly-accessible full text available December 6, 2024
  3. Network games are commonly used to capture the strategic interactions among interconnected agents making simultaneous moves. In a Nash equilibrium, the agents' actions must account for the mutual dependencies connecting them, and such an equilibrium is typically obtained by solving a set of fixed-point equations. Stackelberg games, on the other hand, model sequential moves between agents categorized as leaders and followers; the corresponding solution concept, the subgame perfect equilibrium, is typically obtained by backward induction. Both game forms are widely used in the (cyber)security literature: the network game often serves as a template for studying security investment and externalities (also referred to as Interdependent Security (IDS) games), while the Stackelberg game is a formalism for modeling a variety of attacker-defender scenarios. In this study we examine a model that combines both types of strategic reasoning: interdependency as well as sequential moves. Specifically, we consider a network of interconnected first movers (firms or defenders, whose security efforts and practices collectively determine the security posture of the ecosystem) and one or more second movers, the attacker(s), who decide how much effort to exert on attacking the many potential targets. This gives rise to an equilibrium concept that embodies both types of equilibria mentioned above. We examine how its existence and uniqueness conditions differ from those of a standard network game. Of particular interest are comparisons between the two game forms in terms of the effort exerted by the defender(s) and the attacker(s), respectively, and the free-riding behavior among the defenders. A toy sketch contrasting the two solution methods appears after this list.
  4. We study the design of a class of incentive mechanisms that can effectively prevent cheating in a strategic classification or regression problem. A conventional strategic classification or regression problem is modeled as a Stackelberg game, or a principal-agent problem, between the designer of a classifier (the principal) and the individuals subject to the classifier's decisions (the agents), potentially from different demographic groups. The former benefits from the accuracy of its decisions, whereas the latter may have an incentive to game the algorithm into making favorable but erroneous decisions. While prior work tends to focus on designing algorithms that are more robust to such strategic maneuvering, this study focuses on an alternative: designing incentive mechanisms that shape the agents' utilities and induce effort that genuinely improves their skills, which in turn benefits both parties in the Stackelberg game. Specifically, the principal and the mechanism provider (which could be the principal itself) move together in the first stage, publishing and committing to a classifier and an incentive mechanism. The agents are (simultaneous) second movers and best respond to the published classifier and incentive mechanism. When an agent's strategic action merely changes its observable features, it hurts the performance of the algorithm; when the action instead improves the agent's true label, it not only helps the agent achieve a better decision outcome but also preserves the performance of the algorithm. We study how a subsidy mechanism can induce improvement actions and positively impact a number of social well-being metrics, such as the overall skill level of the agents (efficiency) and the positive-rate or true-positive-rate differences between demographic groups (fairness). A small numerical sketch of how a subsidy can flip an agent's best response appears after this list.
  5. Abstract

    BACKGROUND

    Early discrimination and prediction of cognitive decline are crucial for the study of neurodegenerative mechanisms and interventions to promote cognitive resiliency.

    METHODS

    Our research is based on resting-state electroencephalography (EEG); the current dataset includes 137 consensus-diagnosed, community-dwelling Black Americans (ages 60–90 years; 84 healthy controls [HC], 53 with mild cognitive impairment [MCI]) recruited through Wayne State University and the Michigan Alzheimer's Disease Research Center. We conducted a multiscale analysis of time-varying brain functional connectivity and developed an innovative soft discrimination model in which each HC/MCI decision also comes with a connectivity-based score.

    RESULTS

    The leave-one-out cross-validation accuracy is 91.97%, and the 3-fold cross-validation accuracy is 91.17%. Accuracy in predicting the 9- to 18-month progression trend on an availability-limited subsample is 84.61% (a generic leave-one-out evaluation sketch appears after this list).

    CONCLUSION

    The EEG-based soft discrimination model demonstrates high sensitivity and reliability for MCI detection and shows promise for proactively identifying people at risk of MCI before clinical symptoms occur.

     
  6. Network games provide natural machinery for compactly representing strategic interactions among agents whose payoffs depend only sparsely on the actions of others. Beyond interaction sparsity, however, real networks often exhibit a multi-scale structure, in which agents can be grouped into communities, those communities further grouped, and so on, with interactions among such groups also exhibiting sparsity. We present a general model of multi-scale network games that encodes such multi-level structure. We then develop several algorithmic approaches that leverage this multi-scale structure and derive sufficient conditions under which they converge to a Nash equilibrium. Our numerical experiments demonstrate that the proposed approaches enable orders-of-magnitude improvements in scalability when computing Nash equilibria in such games; for example, we can solve previously intractable instances involving up to 1 million agents in under 15 minutes. A sketch of a plain best-response computation on a block-structured network appears after this list.
  7. Service liability interconnections among networked IT and IoT-driven service organizations create potential channels for cascading service disruptions caused by modern cybercrimes such as DDoS, APT, and ransomware attacks. These attacks are known to inflict cascading, catastrophic service disruptions worth billions of dollars across organizations and critical infrastructure around the globe. Cyber-insurance is a risk management mechanism, gaining industry popularity, that covers a client organization's losses after a cyber-attack. However, a successful attack may be of such magnitude that an organizational client's insurance provider cannot cover the multi-party aggregate losses passed on to it by its clients and their descendants in the supply chain, and must therefore re-insure itself through other cyber-insurance firms. A first question worth investigating is therefore whether an ecosystem of profit-minded cyber-insurance companies, each capable of providing re-insurance for a service-networked IT environment, can feasibly cover the aggregate cyber-losses arising from a cyber-attack. Our study focuses on the empirically interesting case of extremely heavy-tailed cyber-risk distributions, which may confront cyber-insurance firms in the modern Internet age in the form of catastrophic service disruptions and could become a standard risk distribution to contend with in the coming IoT age. Surprisingly, and as a negative result for society in the event of such catastrophes, we prove via a game-theoretic analysis that it may not be economically incentive compatible, even under i.i.d. statistical conditions on catastrophic cyber-risk distributions, for risk-averse cyber-insurance companies with limited liability to offer cyber re-insurance solutions, despite the existence of market capacity large enough to achieve full cyber-risk sharing. However, our analysis theoretically endorses the popular opinion that spreading i.i.d. cyber-risks that are not catastrophic is an effective practice for aggregate cyber-risk managers, a result established theoretically and empirically in the past. The failure of a re-insurance market in critically demanding situations following catastrophic cyber-risk events strongly calls for centralized government regulatory action and intervention to promote risk sharing through re-insurance for the benefit of service-networked societies in the IoT age. A Monte Carlo sketch contrasting the pooling of light-tailed and heavy-tailed risks appears after this list.
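Referring back to item 3: that abstract contrasts Nash equilibria, obtained by solving fixed-point equations, with subgame perfect equilibria, obtained by backward induction. The minimal sketch below applies both solution methods to an assumed linear-quadratic network security game with three defenders. The payoff forms, interdependency weights, and the choice of one defender as leader are illustrative assumptions, not the paper's defender/attacker model.

```python
import numpy as np

# Simultaneous-move (IDS-style) network game: Nash equilibrium via fixed-point iteration.
# Assumed linear best response x_i = max(0, a_i - delta * sum_j W[i, j] * x_j):
# security efforts are strategic substitutes, which is what produces free-riding.
def nash_fixed_point(W, a, delta, tol=1e-10, max_iter=10_000):
    x = np.zeros(len(a))
    for _ in range(max_iter):
        x_new = np.maximum(0.0, a - delta * (W @ x))
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

# Sequential-move (Stackelberg) variant: subgame perfect equilibrium via backward induction.
# Here one defender commits first and the others best-respond; this only illustrates the
# backward-induction step, not the paper's defender/attacker model.
def leader_payoff(x0, x_followers, a0, delta, w0):
    # assumed quadratic benefit/cost plus a positive spillover from the followers' efforts
    return a0 * x0 - 0.5 * x0 ** 2 + delta * float(w0 @ x_followers)

def stackelberg_backward_induction(W, a, delta, grid=None):
    grid = np.linspace(0.0, 2.0, 201) if grid is None else grid
    best_x0, best_xf, best_val = None, None, -np.inf
    for x0 in grid:                                     # leader's candidate commitments
        a_eff = a[1:] - delta * W[1:, 0] * x0           # followers see the committed effort
        xf = nash_fixed_point(W[1:, 1:], a_eff, delta)  # followers' induced Nash equilibrium
        val = leader_payoff(x0, xf, a[0], delta, W[0, 1:])
        if val > best_val:
            best_x0, best_xf, best_val = x0, xf, val
    return best_x0, best_xf

if __name__ == "__main__":
    W = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)  # toy 3-defender graph
    a = np.array([1.0, 1.0, 1.0])
    delta = 0.3
    print("Nash efforts (simultaneous moves):", nash_fixed_point(W, a, delta))
    x0, xf = stackelberg_backward_induction(W, a, delta)
    print("Stackelberg: leader effort", x0, "followers", xf)
```

In this toy model the equilibrium efforts (0.625 each) fall below each defender's stand-alone optimum (1.0), which is the free-riding effect the abstract refers to.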
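Referring back to item 4: the mechanism works because a subsidy changes which best response (gaming versus genuine improvement) maximizes an agent's utility. The sketch below is a hypothetical numerical example; the linear threshold classifier, cost values, feature shift, and subsidy level are all invented to make the comparison concrete.

```python
import numpy as np

# Toy agent best response in a strategic-classification setting (illustrative assumptions only).
def agent_utility(action, theta, x, benefit=1.0, cost_game=0.4, cost_improve=0.7, subsidy=0.0):
    """Utility of one agent under a published linear classifier sign(theta @ x_reported - 0.5)."""
    if action == "nothing":
        x_rep, extra = x, 0.0
    elif action == "game":        # manipulate observable features only; true label unchanged
        x_rep, extra = x + 0.3, -cost_game
    elif action == "improve":     # costly effort that raises the true skill (and the features)
        x_rep, extra = x + 0.3, -cost_improve + subsidy
    else:
        raise ValueError(action)
    accepted = float(theta @ x_rep - 0.5 > 0)
    return benefit * accepted + extra

def best_response(theta, x, subsidy):
    actions = ["nothing", "game", "improve"]
    return max(actions, key=lambda a: agent_utility(a, theta, x, subsidy=subsidy))

theta = np.array([0.6, 0.6])
x = np.array([0.3, 0.3])          # agent initially rejected: theta @ x = 0.36 < 0.5
print("no subsidy   ->", best_response(theta, x, subsidy=0.0))   # gaming is cheaper
print("with subsidy ->", best_response(theta, x, subsidy=0.4))   # improvement becomes optimal
```

Here gaming and improving produce the same reported features, so the classifier alone cannot separate them; only the subsidy on improvement shifts the agent's choice.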
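Referring back to item 5: the reported numbers are cross-validated accuracies of a soft discrimination model that attaches a score to each HC/MCI decision. The sketch below is only a generic leave-one-out evaluation skeleton on synthetic stand-in features with the same 84/53 class split; the study's actual multiscale, time-varying EEG connectivity features and model are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut

# Generic leave-one-out evaluation for a "soft" two-class decision (HC vs. MCI) where every
# prediction also carries a score. Features are synthetic stand-ins, not EEG connectivity data.
rng = np.random.default_rng(0)
n, d = 137, 20                        # 137 participants as in the abstract; d is arbitrary here
y = np.array([0] * 84 + [1] * 53)     # 84 HC, 53 MCI
X = rng.normal(size=(n, d)) + 0.8 * y[:, None]   # toy class separation

loo = LeaveOneOut()
correct, scores = 0, []
for train_idx, test_idx in loo.split(X):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    score = clf.predict_proba(X[test_idx])[0, 1]  # soft score in [0, 1] attached to the decision
    scores.append(score)
    correct += int((score > 0.5) == bool(y[test_idx][0]))
print(f"LOO accuracy on synthetic data: {correct / n:.2%}")
```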
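Referring back to item 6: the scalability gains come from algorithms that exploit multi-scale (community) structure when computing Nash equilibria. The sketch below illustrates only the setting, not the paper's algorithm: a plain best-response iteration on an assumed block-sparse linear-quadratic game, where the sparsity of cross-community interactions is what a multi-scale method could further exploit.

```python
import numpy as np

# Hypothetical linear-quadratic network game on a two-level (community) interaction graph.
# Best response of agent i: x_i = max(0, b_i - sum_j A[i, j] * x_j). With A block-sparse
# (denser within communities, sparse across), each sweep only touches a few interactions.
rng = np.random.default_rng(0)

def block_adjacency(n_comms=10, comm_size=50, p_in=0.2, p_out=0.002, w=0.01):
    n = n_comms * comm_size
    A = np.zeros((n, n))
    for c in range(n_comms):
        s = slice(c * comm_size, (c + 1) * comm_size)
        A[s, s] = w * (rng.random((comm_size, comm_size)) < p_in)   # within-community links
    A += w * (rng.random((n, n)) < p_out)                           # sparse cross-community links
    np.fill_diagonal(A, 0.0)
    return A

def best_response_iteration(A, b, tol=1e-8, max_iter=5000):
    x = np.zeros(len(b))
    for it in range(max_iter):
        x_new = np.maximum(0.0, b - A @ x)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new, it
        x = x_new
    return x, max_iter

A = block_adjacency()
b = rng.uniform(0.5, 1.0, size=A.shape[0])
x_star, iters = best_response_iteration(A, b)
print(f"converged in {iters} iterations; mean equilibrium effort = {x_star.mean():.3f}")
```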
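Referring back to item 7: the negative result turns on catastrophic cyber-risks being extremely heavy-tailed, in which case pooling i.i.d. risks does not shrink the per-participant tail that a (re)insurer must absorb. The Monte Carlo sketch below illustrates that contrast under assumed distributions (exponential versus Pareto with tail index 0.8); it is not the paper's game-theoretic analysis.

```python
import numpy as np

# Compare how pooling n i.i.d. risks affects the per-risk 99% quantile (a VaR-style measure)
# for a light-tailed loss distribution versus a catastrophically heavy-tailed one.
rng = np.random.default_rng(0)

def light(size):                  # light-tailed losses (exponential, mean 1)
    return rng.exponential(1.0, size)

def heavy(size):                  # Pareto losses with tail index alpha = 0.8 (infinite mean)
    return (1.0 / (1.0 - rng.random(size))) ** (1.0 / 0.8)

def per_risk_var(sample_fn, n_pool, q=0.99, n_sims=100_000):
    """Per-participant q-quantile when n_pool i.i.d. risks are pooled and shared equally."""
    pooled = sample_fn((n_sims, n_pool)).sum(axis=1) / n_pool
    return np.quantile(pooled, q)

for n in (1, 10, 100):
    print(f"pool size {n:3d}: per-risk 99% VaR  light={per_risk_var(light, n):7.2f}  "
          f"heavy={per_risk_var(heavy, n):10.2f}")
# Expected pattern in this toy simulation: the light-tailed per-risk VaR shrinks toward the
# mean as the pool grows, while the heavy-tailed per-risk VaR does not shrink (and can grow),
# which is the diversification failure behind the abstract's negative result.
```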